# Complex problem solving
## DeepSeek-qwen-Bllossom-32B
- Developer: UNIVA-Bllossom
- License: MIT
- Type: Large Language Model (Transformers, multilingual)

DeepSeek-qwen-Bllossom-32B is built on DeepSeek-R1-Distill-Qwen-32B and aims to improve reasoning performance in Korean-language environments.

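Since the checkpoint is distributed in the Transformers format, it can be loaded with the standard `transformers` API. A minimal sketch, assuming the Hugging Face model ID is `UNIVA-Bllossom/DeepSeek-qwen-Bllossom-32B` and that enough GPU memory is available for a 32B model:

```python
# Minimal sketch: loading a Transformers-format checkpoint for chat-style
# generation. The model ID below is an assumption based on the developer
# and model names listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UNIVA-Bllossom/DeepSeek-qwen-Bllossom-32B"  # assumed ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 32B model
    device_map="auto",           # spread layers across available GPUs
)

# Build a chat prompt; reasoning models usually ship a chat template.
# The prompt asks, in Korean: "What is the fastest way to travel from
# Seoul to Busan?"
messages = [{"role": "user", "content": "서울에서 부산까지 가장 빠른 교통수단은?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
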
## Marco-o1
- Developer: AIDC-AI
- License: Apache-2.0
- Type: Large Language Model (Transformers)

Marco-o1 is an open reasoning model focused on open-ended problem solving, strengthening complex reasoning through chain-of-thought fine-tuning, Monte Carlo tree search, and reflection mechanisms.

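The search-based decoding mentioned above can be approximated cheaply: the sketch below samples several chain-of-thought completions and keeps the one the model itself scores highest. This is a toy stand-in for Monte Carlo tree search, not Marco-o1's actual algorithm, and the model ID is an assumption:

```python
# Toy illustration of search over sampled reasoning paths, in the spirit of
# (but far simpler than) the MCTS-guided decoding described for Marco-o1.
# We sample N chains of thought and keep the one with the highest average
# token log-probability. The model ID is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIDC-AI/Marco-o1"  # assumed Hugging Face ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Think step by step: how many 3-digit numbers are divisible by 7?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True, temperature=0.8, num_return_sequences=4,
    max_new_tokens=256, return_dict_in_generate=True, output_scores=True,
)

# Score each sampled path by its mean transition log-probability;
# -inf scores at padded positions are zeroed (rough, but fine for a toy).
scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
best = scores.nan_to_num(neginf=0.0).mean(dim=1).argmax()
print(tokenizer.decode(outputs.sequences[best], skip_special_tokens=True))
```
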
## Einstein-v6.1-Llama3-8B
- Developer: Weyaxi
- License: Other
- Type: Large Language Model (Transformers, English)

Einstein-v6.1-Llama3-8B is based on Meta-Llama-3-8B and fine-tuned on diverse scientific datasets, specializing in STEM-domain tasks.

## StrangeMerges_53-7B-model_stock
- Developer: Gille
- License: Apache-2.0
- Type: Large Language Model (Transformers)

StrangeMerges_53-7B-model_stock is the result of merging several 7B-parameter models with LazyMergekit, yielding strong text-generation capabilities.

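LazyMergekit itself is driven by a YAML recipe, but the core of a model-stock style merge is element-wise combination of same-architecture checkpoints. A minimal sketch of uniform weight averaging, the simplest such combination; the source model IDs are hypothetical placeholders, and real merges add per-layer weighting and interpolation toward a base model:

```python
# Minimal sketch of uniform weight averaging, one ingredient of "model stock"
# style merges. LazyMergekit's actual pipeline is config-driven; this only
# illustrates the core idea. The model IDs are hypothetical placeholders for
# two fine-tunes sharing the same architecture.
import torch
from transformers import AutoModelForCausalLM

source_ids = ["example-org/model-a-7b", "example-org/model-b-7b"]  # placeholders

# Load the first checkpoint; its tensors become the accumulation buffer.
merged = AutoModelForCausalLM.from_pretrained(source_ids[0], torch_dtype=torch.float32)
state = {k: v.clone() for k, v in merged.state_dict().items()}

# Sum the floating-point parameters of the remaining checkpoints.
for model_id in source_ids[1:]:
    other = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
    for k, v in other.state_dict().items():
        if v.is_floating_point():
            state[k] += v

# Divide by the number of models to get the element-wise mean.
for k, v in state.items():
    if v.is_floating_point():
        state[k] = v / len(source_ids)

merged.load_state_dict(state)
merged.save_pretrained("merged-7b")  # write the averaged weights to disk
```
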
## Minotaur 13B Fixed
- Developer: openaccess-ai-collective
- License: Apache-2.0
- Type: Large Language Model (Transformers)

Minotaur 13B is an instruction-tuned model based on LLaMA-13B, fine-tuned entirely on open-source datasets to ensure reproducibility.